Note: This page's design, presentation and content have been created and enhanced using Claude (Anthropic's AI assistant) to improve visual quality and educational experience.
Week 4 • Sub-Lesson 4

⚖️ Applying Ethics: Case Studies & Your Framework

Practical ethical scenarios, structured analysis, and beginning your personal ethical framework

What We'll Cover

This final session of the week moves from theory to practice. We will introduce a structured decision framework for reasoning through AI ethics dilemmas, work through four detailed case studies using the lenses from Sub-Lesson 1, and begin building the personal ethical framework that forms part of your assessment.

The case studies are designed to be genuinely debatable — there are no obvious right answers. Each involves tensions where different ethical lenses produce different recommendations. That ambiguity is the point.

🧭 A Decision Framework for AI Ethics in Research

Before diving into case studies, here is a structured approach for reasoning through ethical dilemmas. It is not a formula — it is a way to ensure you have considered multiple perspectives before making a decision.

Six Steps for Ethical Reasoning About AI in Research

  1. Identify the dilemma. What is the specific ethical tension? Name it precisely. Often what appears to be one dilemma is actually several entangled ones.
  2. Identify stakeholders. Who is affected? Consider: the researcher, participants, co-authors, supervisors, students, the broader research community, the public, and communities whose data may have contributed to the AI system.
  3. Apply the four lenses. What does each framework say? Consequentialist (what are the likely outcomes?), deontological (what duties or rights are at stake?), virtue ethics (what kind of researcher am I becoming?), ubuntu/relational (how does this affect relationships and community?).
  4. Consider context. What are the relevant institutional policies, disciplinary norms, power dynamics, and resource constraints? An action that is straightforward in one context may be deeply problematic in another.
  5. Make a judgment. What will you actually do, and why? Which lens weighs most heavily in this case, and why? Be honest about trade-offs and uncertainties.
  6. Document and disclose. How will you record and communicate your reasoning? Transparency about the decision-making process is itself an ethical act.

💡 A Tool, Not a Formula

This framework will not give you an answer. It will give you a structured process for arriving at one. The value is in the reasoning — in forcing yourself to consider perspectives you might otherwise overlook — not in reaching a predetermined conclusion.

📋 Case Study 1: The Literature Review

📋 The Scenario

Dr. Amara, a postdoctoral researcher in public health, uses Claude to conduct a systematic literature review for a journal article. She provides her research question and inclusion criteria, and the AI identifies 150 relevant papers, extracts key findings, and drafts a narrative synthesis of the literature.

Dr. Amara checks a random sample of 20 papers against the AI's summaries and finds them accurate. She does not have time to verify all 150. She submits the literature review as part of a journal article, with a methods note stating: "AI tools were used to assist with literature search and synthesis."

Consequentialist Analysis

The positive outcomes are real: a more comprehensive review than Dr. Amara could have produced manually, completed in less time. But the 130 unverified summaries introduce risk: if any contain errors or fabricated citations (a well-documented failure mode of these tools), the published review could mislead the field. And if researchers routinely publish AI-generated reviews they have not fully verified, the aggregate effect could erode the reliability of the scientific literature.
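To make that risk concrete, here is a back-of-envelope check. The error count is an assumed figure for illustration, not something stated in the scenario: suppose roughly 5% of the 150 summaries (8 of them) contain an error. A clean random sample of 20 would still occur about a third of the time:

```python
# Hypothetical illustration: suppose 8 of the 150 summaries (about 5%)
# contain an error. How likely is it that Dr. Amara's random sample of 20
# comes back clean anyway? The hypergeometric distribution gives the answer.
from scipy.stats import hypergeom

total_papers = 150
flawed = 8        # assumed error count, for illustration only
sample_size = 20

p_clean = hypergeom.pmf(0, total_papers, flawed, sample_size)
print(f"P(clean sample despite {flawed} flawed summaries) = {p_clean:.2f}")
# ~0.31: a spotless sample of 20 is only weak evidence that all 150 are sound
```

The point is not the specific number but the shape of the reasoning: a clean spot check is consistent with a meaningfully flawed full set, which is worth keeping in mind when the discussion questions below ask whether 20 out of 150 is sufficient due diligence.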

Deontological Analysis

A researcher has a duty to vouch for the accuracy of their published work. Submitting a review based on 130 unverified summaries is difficult to reconcile with this duty. The disclosure statement ("AI tools were used to assist") is technically true but arguably misleading — it does not convey the extent of AI involvement or the limited verification.

Virtue Ethics Analysis

Is Dr. Amara developing the deep familiarity with the literature that scholarly expertise requires? A systematic review is not just a product — it is a process through which the researcher builds mastery of their field. If AI does the intellectual work of synthesis, what is the researcher's contribution?

Ubuntu Analysis

How does this affect Dr. Amara's relationship with her research community? Readers trust that a published review represents the author's genuine engagement with the literature. Other researchers may build on this review, cite it, and make decisions based on it. The relational responsibility extends beyond Dr. Amara to everyone who relies on her work.

💬 Discussion Questions

Is checking 20 out of 150 papers sufficient due diligence? Where is the threshold? Is the disclosure statement adequate — should it specify the degree of AI involvement? Would your assessment change if Dr. Amara had checked all 150 summaries? What if she had no time constraints — does time pressure change the ethical calculus?

📋 Case Study 2: The Data Analysis

📋 The Scenario

Thabo, a PhD student at a university in Johannesburg, is conducting qualitative research on HIV stigma in peri-urban communities. He has 45 in-depth interview transcripts in a mixture of English, isiZulu, and Sesotho. He uses GPT-4 to assist with thematic coding — uploading anonymised transcripts and asking the AI to identify recurring themes and apply codes.

The AI produces a coherent coding framework that Thabo reviews and modifies. However, he notices that some of the AI's initial codes reflect categories more common in US and European HIV research than in the South African context — for example, framing stigma primarily through individual psychology rather than through community and family dynamics. He corrects the codes he notices, but is uncertain whether the overall coding framework has been subtly shaped by the AI's training data biases in ways he has not detected.

Thabo discloses AI use in his methods section.

Consequentialist Analysis

The coding assistance allows Thabo to handle a multilingual dataset that would otherwise require a research team he does not have access to. But if the AI's Western-centric categories have shaped the analysis in undetected ways, the findings may misrepresent the lived experience of the participants — with real consequences for how HIV stigma is understood and addressed in this context.

Deontological Analysis

Thabo has a duty to represent his participants' experiences accurately. Applying interpretive categories that may not fit their reality — even unknowingly — is in tension with this duty. He also has duties to his discipline: qualitative research depends on the researcher's deep engagement with the data, not delegation to an algorithm.

Virtue Ethics Analysis

Is Thabo developing the analytical skills that a qualitative researcher needs? The ability to immerse oneself in data, to recognise patterns that do not fit preexisting categories, and to let participants' voices shape the analysis — these are core competencies that AI assistance may shortcut rather than develop.

Ubuntu Analysis

This case is where ubuntu adds the most distinctive insight. The participants shared deeply personal experiences of stigma within specific community contexts. Applying Western-derived categories to their stories is a form of epistemic injustice — imposing external frameworks on lived experience. The relational obligation to participants goes beyond individual consent to include representing their experience in terms that honour their reality. The question is not just about data accuracy but about the relationship between researcher and community.

💬 Discussion Questions

How can Thabo assess whether the AI has introduced systematic bias into his coding framework? Does the fact that he is working with multilingual data from a non-Western context make AI assistance more or less risky? What would a responsible approach to AI-assisted qualitative coding look like in this context? How does the RIA Just AI Framework's concept of epistemic justice apply here?
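One concrete starting point for the first question, sketched below under assumptions that are not in the scenario (the code labels and excerpt counts are hypothetical illustrations): Thabo could independently code a subsample himself, then measure his agreement with the AI using Cohen's kappa and inspect where the disagreements cluster.

```python
# A minimal sketch of one bias check: compare Thabo's own codes against the
# AI's codes on the same excerpts. All labels below are hypothetical.
from sklearn.metrics import cohen_kappa_score

# Codes each applied to the same ten transcript excerpts (illustrative data)
thabo_codes = ["community", "family", "individual", "community", "family",
               "community", "individual", "family", "community", "community"]
ai_codes    = ["individual", "family", "individual", "individual", "family",
               "individual", "individual", "family", "community", "individual"]

kappa = cohen_kappa_score(thabo_codes, ai_codes)
print(f"Cohen's kappa: {kappa:.2f}")

# Low agreement is not the whole story: if disagreements cluster on specific
# codes (the AI reading "individual" where Thabo reads "community"), that
# pattern is precisely the Western-centric drift he suspects.
```

An agreement statistic cannot prove the absence of bias, but patterned disagreement is a strong signal of where to look, and documenting the check is itself part of the transparency the framework asks for.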

📋 Case Study 3: The Grant Application

📋 The Scenario

Dr. Nkosi is an early-career academic at a South African university. English is her third language, and she has found grant writing in English particularly challenging. She uses Claude to substantially draft a grant application — providing her research ideas, preliminary data, and methodological approach, and asking the AI to produce a polished narrative. The resulting application is well-structured, persuasive, and reads fluently.

A senior colleague reviews the draft and comments that it is the strongest proposal she has seen from Dr. Nkosi. The funding body has no published policy on AI use in applications. Dr. Nkosi submits the application without disclosing AI use.

Consequentialist Analysis

If the research itself is strong — good ideas, solid methodology, clear potential impact — does it matter how the proposal was written? The funder wants to support good research, and Dr. Nkosi's research may genuinely deserve funding. But if AI-polished applications become the norm without disclosure, researchers without AI access are systematically disadvantaged. The competitive landscape is distorted.

Deontological Analysis

No explicit policy exists — does the absence of a rule make it acceptable? Grant applications carry an implicit expectation that they represent the applicant's own scholarly voice and capability. Even without a formal rule, there may be an implicit duty of honest self-representation. The funder is partly assessing the researcher's communication ability alongside their research ideas.

Virtue Ethics Analysis

Is Dr. Nkosi developing the grant-writing skills that her career will require? Grant writing is a core academic competence. If AI does this work, Dr. Nkosi may secure funding now but remain unable to write independently. On the other hand, learning from well-structured AI output could itself be a form of skill development — a more generous reading.

Ubuntu Analysis

This case introduces a fairness dimension that ubuntu is particularly well-suited to analyse. Dr. Nkosi faces a structural disadvantage — writing in her third language in a system that privileges native English speakers. AI partially levels this uneven playing field. From an ubuntu perspective, the question is whether the academic community's reliance on English-language fluency as a proxy for research quality is itself unjust. Does AI use in this context restore a more equitable relationship between Dr. Nkosi and the funding system?

💬 Discussion Questions

Does the language equity dimension change your ethical assessment compared to a native English speaker using AI for the same purpose? Should funders require AI disclosure in applications? Should they prohibit AI use, permit it, or actively encourage it as an equity tool? What would happen if Dr. Nkosi disclosed AI use — would it help or harm her application?

📋 Case Study 4: The Student Project

📋 The Scenario

Fatima, a master's student in environmental science, uses Claude to generate Python code for statistical analysis of her research data on water quality in the Western Cape. The AI produces working code for her planned regression analysis. But the AI also suggests an alternative analytical approach — a mixed-effects model that accounts for spatial autocorrelation — which Fatima had not considered. This alternative approach produces results that are more favourable to her hypothesis.

Fatima adopts the AI-suggested approach. She understands the code at a general level but is not confident she could explain the mathematical details of the mixed-effects model. She discloses in her methods: "Statistical analysis was conducted using Python code generated with assistance from Claude (Anthropic). The mixed-effects modelling approach was suggested by the AI assistant."
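For readers unfamiliar with the method, here is a minimal sketch of the kind of code Fatima might be reviewing, using statsmodels' MixedLM. The column names and data are hypothetical, and note the hedge in the comments: a random intercept per site captures grouped correlation but is only a rough approximation of spatial autocorrelation, not a full spatial model.

```python
# Minimal sketch of a mixed-effects analysis, assuming hypothetical column
# names (site, rainfall, nitrate) and synthetic data for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_sites, n_obs = 12, 10  # repeated samples at each monitoring site
df = pd.DataFrame({
    "site": np.repeat([f"site_{i}" for i in range(n_sites)], n_obs),
    "rainfall": rng.normal(50, 10, n_sites * n_obs),
})
site_effect = np.repeat(rng.normal(0, 2, n_sites), n_obs)
df["nitrate"] = 5 + 0.1 * df["rainfall"] + site_effect + rng.normal(0, 1, len(df))

# Random intercept per site: observations from the same site share a baseline,
# which captures grouped (within-site) correlation. Note: this does NOT model
# distance-based spatial autocorrelation; that would need an explicit spatial
# covariance structure beyond what MixedLM provides.
model = smf.mixedlm("nitrate ~ rainfall", df, groups=df["site"])
result = model.fit()
print(result.summary())
```

Even this simple sketch embeds assumptions (normal random effects, a linear rainfall term, independence across sites) that a researcher adopting the method would need to be able to defend; that is exactly the competence question the analyses below examine.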

Consequentialist Analysis

The mixed-effects model may genuinely be more appropriate for the data — spatial autocorrelation is a real methodological concern for environmental data. If so, the AI has improved the analysis. But adopting a method you do not fully understand introduces risk: if reviewers question the approach, can Fatima defend it? If the model contains subtle errors or inappropriate assumptions, will she detect them?

Deontological Analysis

Does Fatima have a duty to fully understand every analytical method she employs? Research methodology requires that the researcher can justify their choices. Adopting a method because an AI suggested it — especially when it produces more favourable results — is uncomfortably close to p-hacking or outcome-driven methodology selection, even if the suggestion is technically sound.

Virtue Ethics Analysis

A master's degree is fundamentally about developing competence. If Fatima cannot explain the mathematical foundations of her chosen analytical approach, has she developed the competence that the degree is meant to certify? The disclosure is commendable, but disclosure does not resolve the competence question.

Ubuntu Analysis

Fatima's supervisor and examiners need to be able to trust that she understands her own research. If she cannot explain or defend her analytical choices, the examination relationship is undermined. More broadly, her peers — who may be doing their analyses without AI assistance — are held to a different standard of methodological understanding.

💬 Discussion Questions

Is it ever appropriate to adopt an analytical approach you do not fully understand? Where is the line between "using a tool" and "not understanding your own research"? Does the fact that the AI-suggested approach produces more favourable results change the ethical assessment? Is Fatima's disclosure adequate? What responsibility does her supervisor have in this situation?

📝 Beginning Your Personal Ethical Framework

One component of the course assessment (20%, due Week 12) is developing a personal ethical framework for AI use in your research domain. This is not a one-off assignment — it should evolve as your understanding develops across the course. This week, you begin.

Building Your Framework: A Starting Structure

  1. Identify your AI use cases. What are the specific ways you might use (or are already using) AI in your research? Be concrete: literature search, writing assistance, data analysis, coding, translation, brainstorming, etc.
  2. Map the ethical tensions. For each use case, what are the key ethical tensions? Consider: transparency, competence, bias, privacy, fairness, relationships with participants and community.
  3. Choose your lenses. Which ethical framework(s) do you find most useful for each tension, and why? You are not required to use all four for every question — the point is to be thoughtful about which perspective is most illuminating in each case.
  4. Draft your principles. Write a set of personal principles or guidelines for AI use in your research. These should be specific enough to guide real decisions, not so general as to be meaningless.
  5. Name your uncertainties. Where are you genuinely unsure? What questions remain open? Honest uncertainty is more valuable than false confidence.

💡 This Is a First Draft

Your ethical framework will be revisited and refined throughout the course as you encounter new topics — AI-assisted literature review (Week 5), AI for writing (Week 6), AI for data analysis (Week 7), and AI for coding (Week 8) will all introduce new ethical dimensions. The goal this week is to establish a foundation, not to produce a finished document. Expect your framework to evolve significantly.

💡 Weekly Assessment

Ethical Analysis (800 words): Describe one real or realistic ethical dilemma involving AI in your research domain. Apply two different ethical frameworks from those covered this week. Explain what you would actually do, and why. Use the six-step decision framework as your structure.

This is not a theoretical exercise — choose a dilemma you might genuinely face. The best submissions will demonstrate genuine engagement with the tension, not just a mechanical application of frameworks.

🔎 The Journal Policy Audit Activity

An in-class activity to ground the transparency discussion in your own disciplinary context.

📊 Structured Audit Guide

Choose the top 5 journals in your field (or the 5 journals you are most likely to submit to). For each, investigate and record:

  • Does the journal have an AI/LLM policy? If yes, when was it last updated? If no, note its absence — that is also informative.
  • What disclosure is required? Where should it appear (methods, acknowledgements, cover letter)? How specific must it be?
  • What are the authorship rules? Can AI be listed as an author? As a contributor? Must AI contributions be specified in contribution statements?
  • What is ambiguous or missing? Are there grey areas the policy does not address? Common gaps include: AI use in peer review, AI use for data analysis vs. writing, AI use in revisions after initial submission.
  • How does it compare to other journals? Are the policies in your field more or less restrictive than those of Nature, Science, or the ACM?

Bring your findings to class — we will compare across disciplines and discuss what the patterns reveal about your field's relationship with AI.

⚠️ Check the Latest Version

Many journals have updated their AI policies multiple times since early 2023. Check the most recent version of author guidelines directly on the journal's website — do not rely on cached versions, secondary summaries, or policies from previous years.

📚 Week 4 Summary: Ethical Frameworks for AI in Research

This week provided you with the conceptual tools and practical knowledge for navigating the ethical dimensions of AI in research.

  • Four philosophical lenses — consequentialism, deontology, virtue ethics, and ubuntu — provide complementary perspectives. Their disagreement is a feature, not a bug
  • Ubuntu and relational ethics offer insights that individualist frameworks miss — about collective harm, structural injustice, and the social fabric that research communities depend on
  • The RIA Just AI Framework moves beyond "responsible AI" to demand restorative and redistributive justice, with nine core inquiries spanning human rights, epistemic justice, data justice, and sustainability
  • Disclosure is the practical foundation of ethical AI use — and norms are converging toward more transparency, not less
  • Bias, privacy, authorship, and integrity are not solved problems — they require ongoing attention and judgment as the technology and its governance evolve
  • Personal ethical frameworks should be living documents — specific enough to guide real decisions, honest enough to acknowledge genuine uncertainty
  • Case studies reveal that ethical dilemmas in AI-assisted research rarely have simple answers — the value is in the reasoning, not in reaching a predetermined conclusion
  • The landscape is vast: labour exploitation, surveillance AI, autonomous weapons, corporate power, deepfakes, gender, disability, and many more dimensions of AI ethics extend beyond what we could cover this week. See The Broader Landscape of AI Ethics for an orientation and further reading

Next week (Week 5): We move from ethical foundations to practical application — AI-assisted literature review. How can AI help you find, organise, and synthesise research literature? What are the risks, and how do you use these tools responsibly?